# Multi-task Evaluation

- **StableLM Zephyr 3B GGUF** (License: Other)
  A 3-billion-parameter instruction-tuned model trained on public and synthetic datasets and aligned with Direct Preference Optimization (DPO); performs well for its size class.
  Tags: Large Language Model, English · Author: brittlewis12 · Downloads: 51 · Likes: 1
- **Mini-GTE** (License: Apache-2.0)
  A lightweight sentence embedding model based on DistilBERT, suitable for a range of text processing tasks.
  Tags: Text Embedding, English · Author: prdev · Downloads: 1,240 · Likes: 4
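Sentence embedding models like the ones above map each sentence to a fixed-length vector; two sentences are then compared by the cosine of the angle between their vectors. A minimal pure-Python sketch with made-up toy vectors (real models emit hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (illustrative values, not model output).
emb_cat = [0.9, 0.1, 0.0]
emb_kitten = [0.8, 0.2, 0.1]
emb_car = [0.0, 0.1, 0.9]

# Semantically close sentences should score higher than unrelated ones.
assert cosine_similarity(emb_cat, emb_kitten) > cosine_similarity(emb_cat, emb_car)
```

In practice the vectors come from the model's encode step; the comparison itself is exactly this computation.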
- **Conan-embedding-v1 Q4_K_M GGUF**
  A Chinese text embedding model developed by the Tencent BAC team, focused on semantic representation and similarity computation for Chinese text.
  Tags: Text Embedding, Chinese · Author: KenLi315 · Downloads: 48 · Likes: 2
- **Stella En 400M v5** (License: MIT)
  An English text embedding model that performs well across multiple text classification and retrieval tasks.
  Tags: Large Language Model, Transformers, Other · Author: billatsectorflow · Downloads: 7,630 · Likes: 3
- **GTE Reranker ModernBERT Base** (License: Apache-2.0)
  An English text reranking model built on the ModernBERT pre-training architecture, developed by Alibaba's Tongyi Lab; supports long inputs of up to 8,192 tokens.
  Tags: Text Embedding, Transformers, English · Author: Alibaba-NLP · Downloads: 17.69k · Likes: 56
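A reranker like the model above scores each (query, passage) pair jointly and reorders retrieved candidates by that score. The sketch below shows only the reranking control flow; `score_pair` is a hypothetical stand-in (simple token overlap), not the model's actual learned scorer:

```python
def score_pair(query, passage):
    # Stand-in relevance scorer using token overlap. A real cross-encoder
    # reranker would jointly encode the (query, passage) pair and output
    # a learned relevance score instead.
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def rerank(query, passages):
    # Score every (query, passage) pair, then sort best-first.
    return sorted(passages, key=lambda p: score_pair(query, p), reverse=True)

docs = [
    "the weather is sunny today",
    "modernbert supports long contexts",
    "rerankers reorder retrieved passages",
]
top = rerank("which model supports long contexts", docs)
# top[0] is the passage sharing the most query tokens.
```

Rerankers are typically applied to a short candidate list produced by a cheaper retriever, since scoring every pair with a full model is expensive.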
- **Granite Embedding 30M English** (License: Apache-2.0)
  A transformer-based English text embedding model developed and released by IBM.
  Tags: Text Embedding, Transformers, English · Author: ibm-granite · Downloads: 78.53k · Likes: 10
- **GTE Multilingual Base** (License: Apache-2.0)
  A multilingual sentence embedding model supporting over 50 languages, suitable for tasks such as sentence similarity computation.
  Tags: Text Embedding, Transformers, Multilingual · Author: Alibaba-NLP · Downloads: 1.2M · Likes: 246
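Embedding models such as these are commonly used for semantic retrieval: embed the corpus once, embed the query, and return the nearest vectors. A toy sketch with made-up 2-D vectors (the `top_k` helper and all values here are illustrative, not part of any model's API):

```python
import math

def normalize(v):
    # Scale a vector to unit length.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def top_k(query_vec, corpus_vecs, k=2):
    # With unit-length vectors, the dot product equals cosine similarity.
    q = normalize(query_vec)
    scored = []
    for idx, vec in enumerate(corpus_vecs):
        v = normalize(vec)
        scored.append((sum(a * b for a, b in zip(q, v)), idx))
    scored.sort(reverse=True)               # highest similarity first
    return [idx for _, idx in scored[:k]]   # indices of nearest vectors

# Toy 2-D "embeddings" standing in for real model output.
corpus = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
hits = top_k([0.9, 0.1], corpus, k=2)
```

At scale this brute-force scan is replaced by an approximate nearest-neighbor index, but the ranking criterion is the same.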
- **Yinka**
  Evaluated on multiple tasks in the Chinese Massive Text Embedding Benchmark (C-MTEB), including text similarity, classification, clustering, and retrieval.
  Tags: Large Language Model, Transformers · Author: Classical · Downloads: 388 · Likes: 18
- **Venusaur** (License: MIT)
  A sentence embedding model built on the Mihaiii/Bulbasaur base model, focused on sentence similarity and feature extraction.
  Tags: Text Embedding · Author: Mihaiii · Downloads: 290 · Likes: 3
- **Snowflake Arctic Embed L** (License: Apache-2.0)
  A model focused on sentence similarity and feature extraction, suitable for a variety of natural language processing tasks.
  Tags: Text Embedding, Transformers · Author: Snowflake · Downloads: 50.58k · Likes: 93
- **Snowflake Arctic Embed M** (License: Apache-2.0)
  A sentence-transformer model focused on sentence similarity, efficiently extracting text features and computing similarity between sentences.
  Tags: Text Embedding, Transformers · Author: Snowflake · Downloads: 722.08k · Likes: 154
- **EEVE DPO v3** (License: Apache-2.0)
  A Korean instruction-optimized model based on EEVE-Korean-Instruct-10.8B-v1.0, trained with Direct Preference Optimization (DPO).
  Tags: Large Language Model, Transformers · Author: ENERGY-DRINK-LOVE · Downloads: 1,803 · Likes: 1
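Several models in this list are aligned with Direct Preference Optimization (DPO), which trains on (chosen, rejected) response pairs by minimizing -log σ(β · [(log π(y_w) − log π_ref(y_w)) − (log π(y_l) − log π_ref(y_l))]). A toy sketch of that loss on scalar log-probabilities (all numbers are made up for illustration):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # DPO loss: -log sigmoid(beta * ((logp_w - ref_w) - (logp_l - ref_l)))
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy prefers the chosen response more than the reference model
# does, the margin is positive and the loss falls below -log(0.5) = log(2).
loss = dpo_loss(logp_chosen=-10.0, logp_rejected=-12.0,
                ref_chosen=-11.0, ref_rejected=-11.0)
```

In real training the log-probabilities are summed over response tokens and the loss is averaged over a batch, but the per-pair objective is this expression.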
- **Laser Dolphin Mixtral 2x7B DPO** (License: Apache-2.0)
  A medium-scale Mixture of Experts (MoE) built from Dolphin-2.6-Mistral-7B-DPO-Laser, with an average improvement of roughly one point across evaluations.
  Tags: Large Language Model, Transformers · Author: macadeliccc · Downloads: 133 · Likes: 57
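In a 2x7B-style Mixture of Experts, a small router scores the experts for each input and dispatches it to the best-scoring one, so only a fraction of the total parameters runs per token. A toy top-1 routing sketch (the experts and router weights below are made up for illustration):

```python
def gate(x, router_weights):
    # Router: score each expert for this input, pick the highest (top-1).
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in router_weights]
    return max(range(len(scores)), key=lambda i: scores[i])

def moe_forward(x, experts, router_weights):
    # Dispatch the input to the selected expert; the others do not run,
    # which keeps per-token compute near single-expert cost.
    chosen = gate(x, router_weights)
    return experts[chosen](x)

experts = [
    lambda x: sum(x) * 2.0,   # toy "expert 0"
    lambda x: sum(x) * -1.0,  # toy "expert 1"
]
router = [[1.0, 0.0], [0.0, 1.0]]  # expert 0 favors dim 0, expert 1 dim 1
y = moe_forward([3.0, 1.0], experts, router)  # dim 0 dominates: expert 0 runs
```

Real MoE layers route per token, often to the top-2 experts with weighted mixing, but the select-then-dispatch pattern is the same.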
- **Tao 8K** (License: Apache-2.0)
  tao-8k-origin is a model focused on sentence similarity computation, supporting multiple similarity measures and performing well on a range of Chinese evaluation datasets.
  Tags: Text Embedding, Chinese · Author: Amu · Downloads: 639 · Likes: 46
- **Stella Large ZH v2**
  stella-large-zh-v2 is a Chinese model focused on sentence similarity computation, supporting a variety of semantic textual similarity and text classification tasks.
  Tags: Text Embedding · Author: infgrad · Downloads: 259 · Likes: 32
- **Piccolo Base ZH**
  A Chinese base embedding model covering natural language processing tasks such as semantic textual similarity (STS), classification, clustering, and retrieval.
  Tags: Text Embedding, Transformers · Author: sensenova · Downloads: 303 · Likes: 35
- **Open Instruct Stanford Alpaca 7B**
  A 7B-parameter LLaMA model fine-tuned on the Stanford Alpaca dataset, focused on open-source instruction tuning.
  Tags: Large Language Model, Transformers, English · Author: allenai · Downloads: 220 · Likes: 10
- **Polyglot-Ko 5.8B** (License: Apache-2.0)
  A large-scale Korean autoregressive language model developed by EleutherAI's multilingual team, with 5.8 billion parameters trained on 863 GB of Korean data.
  Tags: Large Language Model, Transformers, Korean · Author: EleutherAI · Downloads: 1,148 · Likes: 65
- **SBERT Chinese General v1** (License: Apache-2.0)
  A general-purpose Chinese sentence embedding model for sentence similarity and semantic search.
  Tags: Text Embedding, Transformers, Chinese · Author: DMetaSoul · Downloads: 388 · Likes: 6
- **BERT Base (KLUE)**
  A Korean-pretrained BERT model developed by the KLUE benchmark team, supporting a range of Korean language understanding tasks.
  Tags: Large Language Model, Transformers, Korean · Author: klue · Downloads: 129.68k · Likes: 47
- **RoBERTa Ko Small** (License: Apache-2.0)
  A compact Korean RoBERTa model trained with the LASSL framework, suitable for a variety of Korean natural language processing tasks.
  Tags: Large Language Model, Transformers, Korean · Author: lassl · Downloads: 17 · Likes: 2
- **BERT Base Romanian Cased v1** (License: MIT)
  A case-sensitive Romanian BERT base model trained on a 15 GB corpus.
  Tags: Large Language Model, Other · Author: dumitrescustefan · Downloads: 6,466 · Likes: 15
- **Fairseq Dense 2.7B**
  A converted version of the 2.7-billion-parameter dense model from 'Efficient Large Scale Language Modeling with Mixtures of Experts' by Artetxe et al.
  Tags: Large Language Model, Transformers, English · Author: KoboldAI · Downloads: 18 · Likes: 3
© 2025 AIbase